
apbf: Switch to fast reduce method. #2584

Conversation

davecgh (Member) commented Feb 2, 2021

This requires #2579.

This modifies the modular reduction step to make use of the same fast reduce mechanism used in the gcs filters. As can be seen in the following benchmark results, it more than doubles the performance of the primary filter operations.

To experimentally validate that the mapping adheres to the theoretical results and does not adversely affect the false positive rates, the validation methodology described in README.md was repeated, and the README has been updated accordingly. Everything is well within the margin of error, as expected.

Finally, the README is also updated with the new benchmark results, and the required Go version in go.mod is bumped due to the addition of the math/bits import.

name                     old time/op    new time/op  delta
-----------------------------------------------------------------------------
capacity=1000, fprate=0.1%
--------------------------
BenchmarkAdd             158ns ± 1%      59ns ± 1%   -62.59%  (p=0.008 n=5+5)
BenchmarkContainsTrue    183ns ± 1%      69ns ± 2%   -62.27%  (p=0.008 n=5+5)
BenchmarkContainsFalse   61.3ns ±40%   42.0ns ±26%   -31.41%  (p=0.032 n=5+5)

capacity=1000, fprate=0.01%
---------------------------
BenchmarkAdd              211ns ± 1%     69ns ± 1%   -67.07%  (p=0.008 n=5+5)
BenchmarkContainsTrue     236ns ± 2%     80ns ± 1%   -66.16%  (p=0.008 n=5+5)
BenchmarkContainsFalse   59.6ns ±24%    37.7ns ± 5%  -36.74%  (p=0.008 n=5+5)

capacity=1000, fprate=0.001%
----------------------------
BenchmarkAdd             247ns ± 0%       78ns ± 2%  -68.32%  (p=0.008 n=5+5)
BenchmarkContainsTrue    272ns ± 1%       89ns ± 1%  -67.50%  (p=0.008 n=5+5)
BenchmarkContainsFalse   58.6ns ±26%    37.0ns ± 4%  -36.98%  (p=0.008 n=5+5)

capacity=100000, fprate=0.01%
-----------------------------
BenchmarkAdd              205ns ± 2%      80ns ± 1%  -61.12%  (p=0.008 n=5+5)
BenchmarkContainsTrue     219ns ± 1%      80ns ± 1%  -63.39%  (p=0.008 n=5+5)
BenchmarkContainsFalse   70.3ns ±46%    37.6ns ±10%  -46.61%  (p=0.008 n=5+5)

capacity=100000, fprate=0.0001%
-------------------------------
BenchmarkAdd              275ns ± 2%     110ns ± 1%  -60.10%  (p=0.008 n=5+5)
BenchmarkContainsTrue     287ns ± 1%      98ns ± 1%  -65.98%  (p=0.008 n=5+5)
BenchmarkContainsFalse   56.6ns ±45%    36.3ns ± 6%  -35.93%  (p=0.008 n=5+5)

capacity=100000, fprate=0.00001%
--------------------------------
BenchmarkAdd             413ns ± 3%     205ns ± 2%   -50.41%  (p=0.016 n=5+5)
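
For reference, the fast reduce referred to above replaces the comparatively expensive modulo with a multiply-and-shift mapping (a technique often attributed to Daniel Lemire): a uniformly distributed 64-bit value x is mapped into [0, n) by keeping only the high 64 bits of the 128-bit product x*n. The following is a minimal, hypothetical Go sketch of the idea rather than the actual apbf or gcs code; the names fastReduce and numBuckets are purely illustrative.

```go
package main

import (
	"fmt"
	"math/bits"
)

// fastReduce maps a uniformly distributed 64-bit value x into [0, n) without
// a modulo operation by taking the high 64 bits of the 128-bit product x*n,
// which is equivalent to floor(x*n / 2^64).  bits.Mul64 requires Go 1.12+.
func fastReduce(x, n uint64) uint64 {
	hi, _ := bits.Mul64(x, n)
	return hi
}

func main() {
	// Map a hash-like value into a hypothetical table of 1000 buckets.
	const numBuckets = 1000
	x := uint64(0xdeadbeefcafebabe)
	fmt.Println(fastReduce(x, numBuckets)) // an index in [0, numBuckets)
}
```

Because the input values are effectively uniform over the full 64-bit range, the resulting indices are effectively uniform over [0, n), which is why the mapping can stand in for a true modulo without degrading the false positive rates, as the revalidation described above confirms.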

davecgh added this to the 1.7.0 milestone on Feb 2, 2021
rstaudt2 (Member) left a comment

Looks good, that is an interesting technique!

davecgh merged commit e2c78f8 into decred:master on Feb 4, 2021
davecgh deleted the apbf_fast_reduce branch on February 4, 2021 08:27